Improved Bounds for Online Preemptive Matching
When designing a preemptive online algorithm for the maximum matching
problem, we wish to maintain a valid matching M while edges of the underlying
graph are presented one after the other. When presented with an edge e, the
algorithm should decide whether to augment the matching M by adding e (in which
case e may be removed later on) or to keep M in its current form without adding
e (in which case e is lost for good). The objective is to eventually hold a
matching M with maximum weight.
The main contribution of this paper is to establish new lower and upper
bounds on the competitive ratio achievable by preemptive online algorithms:
1. We provide a lower bound of 1 + ln 2 ≈ 1.693 on the competitive ratio of any
randomized algorithm for the maximum cardinality matching problem, thus
improving on the currently best known bound of e/(e-1) ≈ 1.581 due to Karp,
Vazirani, and Vazirani [STOC'90].
2. We devise a randomized algorithm that achieves an expected competitive
ratio of 5.356 for maximum weight matching. This finding demonstrates the power
of randomization in this context, showing how to beat the tight bound of
3 + 2√2 ≈ 5.828 for deterministic algorithms, obtained by combining the 5.828
upper bound of McGregor [APPROX'05] and the recent 5.828 lower bound of
Varadaraja [ICALP'11].
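For intuition, the deterministic benchmark beaten here follows a simple eviction rule in the spirit of McGregor's algorithm: accept a new edge only if it is sufficiently heavier than the matched edges it conflicts with. A minimal sketch, where the parameter gamma and the data layout are illustrative assumptions (the competitive analysis, which optimizes gamma, is not reproduced):

```python
def preemptive_matching(edges, gamma=0.7):
    """Preemptive online matching sketch.

    edges: iterable of (u, v, weight), revealed one by one.
    A new edge is accepted iff its weight exceeds (1 + gamma) times the
    total weight of the (at most two) matched edges sharing an endpoint;
    those conflicting edges are then preempted (removed for good).
    """
    matched = {}  # vertex -> edge (u, v, w) currently covering it
    for u, v, w in edges:
        conflicts = {id(e): e for e in (matched.get(u), matched.get(v)) if e}
        if w > (1 + gamma) * sum(e[2] for e in conflicts.values()):
            for cu, cv, cw in conflicts.values():  # preempt conflicting edges
                matched.pop(cu, None)
                matched.pop(cv, None)
            matched[u] = matched[v] = (u, v, w)
    return set(matched.values())  # deduplicate the two endpoint entries
```

For example, after accepting an edge of weight 1, an adjacent edge of weight 10 preempts it, while an adjacent edge of weight 6 arriving after one of weight 5 is rejected and lost for good, which is exactly the trade-off the lower-bound constructions exploit.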
Efficient Approximation Schemes for Stochastic Probing and Prophet Problems
Our main contribution is a general framework to design efficient polynomial
time approximation schemes (EPTAS) for fundamental classes of stochastic
combinatorial optimization problems. Given an error parameter ε > 0,
such algorithmic schemes attain a (1 - ε)-approximation in only f(ε) · poly(n)
time, where f is some function that depends
only on ε. Technically speaking, our approach relies on presenting
tailor-made reductions to a newly-introduced multi-dimensional extension of the
Santa Claus problem [Bansal-Sviridenko, STOC'06]. Even though the
single-dimensional problem is already known to be APX-Hard, we prove that an
EPTAS can be designed under certain structural assumptions, which hold for our
applications.
To demonstrate the versatility of our framework, we obtain an EPTAS for the
adaptive ProbeMax problem as well as for its non-adaptive counterpart; in both
cases, state-of-the-art approximability results have been inefficient
polynomial time approximation schemes (PTAS) [Chen et al., NIPS'16; Fu et al.,
ICALP'18]. Turning our attention to selection-stopping settings, we further
derive an EPTAS for the Free-Order Prophets problem [Agrawal et al., EC'20] and
for its cost-driven generalization, Pandora's Box with Commitment [Fu et al.,
ICALP'18]. These results improve on known PTASes for their adaptive variants,
and constitute the first non-trivial approximations in the non-adaptive
setting.
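For intuition about the objective, in the non-adaptive ProbeMax problem one commits up front to a subset of random variables and collects the maximum of their realizations. The value of a candidate subset can be estimated by plain Monte Carlo (an illustrative sketch with made-up distributions; this evaluates a solution rather than implementing the EPTAS):

```python
import random

def probemax_value(dists, subset, trials=20000, seed=0):
    """Monte Carlo estimate of E[max of the probed variables].

    dists: list of samplers, each a function rng -> float (assumed here).
    subset: indices of the variables probed up front (non-adaptively).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(dists[i](rng) for i in subset)
    return total / trials

# Example: two fair {0, 10} "coins" versus one deterministic payoff of 6.
dists = [
    lambda rng: rng.choice([0.0, 10.0]),
    lambda rng: rng.choice([0.0, 10.0]),
    lambda rng: 6.0,
]
```

Probing both coins gives E[max] = 10 · (1 - 1/4) = 7.5, beating the safe deterministic 6 — the kind of comparison an approximation scheme must get right for every candidate subset.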
Online Algorithms for Maximum Cardinality Matching with Edge Arrivals
In the adversarial edge arrival model for maximum cardinality matching, edges of an unknown graph are revealed one-by-one in arbitrary order, and should be irrevocably accepted or rejected. Here, the goal of an online algorithm is to maximize the number of accepted edges while maintaining a feasible matching at any point in time. For this model, the standard greedy heuristic is 1/2-competitive, and on the other hand, no algorithm that outperforms this ratio is currently known, even for very simple graphs.
We present a clean Min-Index framework for devising a family of randomized algorithms, and provide a number of positive and negative results in this context. Among these results, we present a 5/9-competitive algorithm when the underlying graph is a forest, and prove that this ratio is best possible within the Min-Index framework. In addition, we prove a new general upper bound of 2/(3 + 1/φ²) ≈ 0.5914 on the competitiveness of any algorithm in the edge arrival model, where φ is the golden ratio. Interestingly, this bound holds even for an easier model in which vertices (along with their adjacent edges) arrive online, and when the underlying graph is a tree of maximum degree at most 3.
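The 1/2-competitive greedy baseline mentioned above is simply: accept an arriving edge whenever both of its endpoints are still unmatched. A minimal sketch (variable names are illustrative):

```python
def greedy_matching(edges):
    """Edge-arrival greedy: accept an edge iff both endpoints are free.

    Any matching built this way is maximal, and a maximal matching has
    at least half the size of a maximum matching -- the 1/2 guarantee.
    """
    matched = set()
    accepted = []
    for u, v in edges:
        if u not in matched and v not in matched:
            accepted.append((u, v))
            matched.update((u, v))
    return accepted
```

On the path 1-2-3-4 the arrival order matters: if the middle edge (2, 3) arrives first, greedy accepts only that edge while the optimum has two, which is exactly the tight 1/2 example.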
Improved approximation guarantees for weighted matching in the semi-streaming model
We study the maximum weight matching problem in the semi-streaming model, and
improve on the currently best one-pass algorithm due to Zelke (Proc. of
STACS 2008, pages 669-680) by devising a deterministic approach whose
performance guarantee is 4.91 + ε. In addition, we study preemptive online
algorithms, a sub-class of one-pass algorithms where we are only allowed to
maintain a feasible matching in memory at any point in time. All known results
prior to Zelke's belong to this sub-class. We provide a lower bound of 4.967 on
the competitive ratio of any such deterministic algorithm, and hence show that
future improvements will have to store in memory a set of edges which is not
necessarily a feasible matching.
Scheduling with Outliers
In classical scheduling problems, we are given jobs and machines, and have to
schedule all the jobs to minimize some objective function. What if each job has
a specified profit, and we are no longer required to process all jobs -- we can
schedule any subset of jobs whose total profit is at least a (hard) target
profit requirement, while still approximately minimizing the objective
function?
We refer to this class of problems as scheduling with outliers. This model
was initiated by Charikar and Khuller (SODA'06) on the minimum max-response
time in broadcast scheduling. We consider three other well-studied scheduling
objectives: the generalized assignment problem, average weighted completion
time, and average flow time, and provide LP-based approximation algorithms for
them. For the minimum average flow time problem on identical machines, we give
a logarithmic approximation algorithm for the case of unit profits based on
rounding an LP relaxation; we also show a matching integrality gap. For the
average weighted completion time problem on unrelated machines, we give a
constant factor approximation. The algorithm is based on randomized rounding of
the time-indexed LP relaxation strengthened by the knapsack-cover inequalities.
For the generalized assignment problem with outliers, we give a simple
reduction to GAP without outliers to obtain an algorithm whose makespan is
within 3 times the optimum makespan, and whose cost is at most (1 + \epsilon)
times the optimal cost.
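In its simplest single-machine form, the outlier version reduces to a knapsack-style selection: meet the hard profit target with a job subset of minimum total processing time. A minimal dynamic-programming sketch under that simplification (single machine, so makespan equals total processing time; this is for intuition only and is far easier than the LP-based multi-machine settings treated in the paper):

```python
def min_makespan_with_profit_target(jobs, target):
    """jobs: list of (processing_time, profit) with integer profits.

    Returns the minimum total processing time over subsets whose total
    profit is at least `target` (inf if the target is unreachable).
    """
    INF = float("inf")
    # best[p] = minimum total time achieving profit >= p (capped at target)
    best = [INF] * (target + 1)
    best[0] = 0
    for time, profit in jobs:
        for p in range(target, -1, -1):  # descending: each job used once
            if best[p] < INF:
                q = min(target, p + profit)  # cap accumulated profit
                best[q] = min(best[q], best[p] + time)
    return best[target]
```

With jobs {(3, 2), (1, 1), (2, 2)} and target 3, the cheapest feasible subset is the two light jobs with total time 3, even though the heavy job alone is profit-dense; this "which jobs to declare outliers" choice is what the knapsack-cover inequalities control in the LP relaxations.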
Maximum Load Assortment Optimization: Approximation Algorithms and Adaptivity Gaps
Motivated by modern-day applications such as Attended Home Delivery and
Preference-based Group Scheduling, where decision makers wish to steer a large
number of customers toward choosing the exact same alternative, we introduce a
novel class of assortment optimization problems, referred to as Maximum Load
Assortment Optimization. In such settings, given a universe of substitutable
products, we are facing a stream of customers, each choosing between either
selecting a product out of an offered assortment or opting to leave without
making a selection. Assuming that these decisions are governed by the
Multinomial Logit choice model, we define the random load of any underlying
product as the total number of customers who select it. Our objective is to
offer an assortment of products to each customer so that the expected maximum
load across all products is maximized. We consider both static and dynamic
formulations. In the static setting, a single offer set is carried throughout
the entire process of customer arrivals, whereas in the dynamic setting, the
decision maker offers a personalized assortment to each customer, based on the
entire information available at that time. The main contribution of this paper
resides in proposing efficient algorithmic approaches for computing
near-optimal static and dynamic assortment policies. In particular, we develop
a polynomial-time approximation scheme (PTAS) for the static formulation.
Additionally, we demonstrate that an elegant policy utilizing weight-ordered
assortments yields a 1/2-approximation. Concurrently, we prove that such
policies are sufficiently strong to provide a 1/4-approximation with respect to
the dynamic formulation, establishing a constant-factor bound on its adaptivity
gap. Finally, we design an adaptive policy whose expected maximum load is
within factor 1 - ε of optimal, admitting a quasi-polynomial time
implementation.
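Under the Multinomial Logit model, the objective for a fixed static assortment can be estimated directly by simulation: each customer independently selects product i with probability v_i / (1 + Σ_j v_j) and leaves without purchasing with the residual probability. A minimal sketch (the preference weights and customer count are illustrative assumptions):

```python
import random

def expected_max_load(weights, n_customers, trials=5000, seed=0):
    """Monte Carlo estimate of E[max product load] for a static assortment.

    weights: MNL preference weights v_i of the offered products; each
    customer picks product i with probability v_i / (1 + sum(weights))
    and opts out with probability 1 / (1 + sum(weights)).
    """
    rng = random.Random(seed)
    denom = 1.0 + sum(weights)
    total = 0.0
    for _ in range(trials):
        loads = [0] * len(weights)
        for _ in range(n_customers):
            r = rng.random() * denom
            for i, v in enumerate(weights):
                if r < v:
                    loads[i] += 1
                    break
                r -= v
            # falling through the loop means the no-purchase option
        total += max(loads)
    return total / trials
```

Note the tension the paper studies: offering more products raises the chance that each customer buys something, but spreads the purchases across products, which can lower the expected maximum load.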
Modeling the impact of changing patient transportation systems on peri-operative process performance in a large hospital: insights from a computer simulation study
Transportation of patients is a key hospital operational activity. During a large construction project, our patient admission and prep area will relocate from immediately adjacent to the operating room suite to another floor of a different building. Transportation will require extra distance and elevator trips to deliver patients and recycle transporters (specifically: personnel who transport patients). Management intuition suggested that starting all 52 first cases simultaneously would require many of the 18 available elevators. To test this, we developed a data-driven simulation tool to allow decision makers to simultaneously address planning and evaluation questions about patient transportation. We coded a stochastic simulation tool for a generalized model treating all factors contributing to the process as Java objects. The model includes elevator steps, explicitly accounting for transporter speed and distance to be covered. We used the model for sensitivity analyses of the number of dedicated elevators, dedicated transporters, transporter speed and the planned process start time on lateness of OR starts and the number of cases with serious delays (i.e., more than 15 min). Allocating two of the 18 elevators and 7 transporters reduced lateness and the number of cases with serious delays. Additional elevators and/or transporters yielded little additional benefit. If the admission process produced ready-for-transport patients 20 min earlier, almost all delays would be eliminated. Modeling results contradicted clinical managers' intuition that starting all first cases on time requires many dedicated elevators. This is explained by the principle of decreasing marginal returns for increasing capacity when there are other limiting constraints in the system.
Funding: National Science Foundation (U.S.) (DMS-0732175, CMMI-0846554); United States Air Force Office of Scientific Research (FA9550-08-1-0369); Singapore-MIT Alliance; Massachusetts Institute of Technology, Buschbaum Research Fund.
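The diminishing-returns effect the simulation uncovered can already be seen in a back-of-the-envelope capacity model: throughput is set by the scarcest resource, so adding elevators beyond the transporter bottleneck buys nothing. A deliberately crude fluid sketch with made-up cycle times (not the stochastic simulation described above):

```python
def first_case_completion_minutes(n_cases, transporters, elevators,
                                  transporter_cycle=10.0, elevator_cycle=2.5):
    """Crude fluid approximation of the morning transport process.

    Each transport occupies one transporter for `transporter_cycle` minutes
    and one elevator for `elevator_cycle` minutes (assumed values), so the
    system throughput in cases/minute is capped by the scarcer resource.
    """
    throughput = min(transporters / transporter_cycle,
                     elevators / elevator_cycle)
    return n_cases / throughput
```

With these assumed cycle times and 7 transporters, two dedicated elevators already push the bottleneck onto the transporter pool, so raising the elevator count from 2 all the way to 18 leaves the 52-case completion time unchanged, mirroring the decreasing marginal returns reported above.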